Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent
Author
Abstract
In this paper, we revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm for sampling from a smooth and (strongly) log-concave density. We improve the existing results when the convergence is measured in the Wasserstein distance and provide further insights into the very tight relations between, on the one hand, the Langevin Monte Carlo algorithm for sampling and, on the other hand, gradient descent for optimization. Finally, we also establish guarantees for the convergence of a version of the Langevin Monte Carlo algorithm that is based on noisy evaluations of the gradient.
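To make the sampling-optimization analogy concrete: the Langevin Monte Carlo iteration is a gradient descent step plus Gaussian noise scaled to the stepsize, and dropping the noise term recovers plain gradient descent. The following is a minimal sketch, not the paper's exact presentation; the function names and the Gaussian target in the usage example are illustrative, and the noisy-gradient variant mentioned in the abstract would simply replace `grad_f` with an inexact estimate.

```python
import numpy as np

def lmc(grad_f, x0, step, n_iters, rng=None):
    """Unadjusted Langevin Monte Carlo:
    x_{k+1} = x_k - h * grad f(x_k) + sqrt(2h) * xi_k,  with xi_k ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Usage: standard Gaussian target in 2-D, i.e. f(x) = ||x||^2 / 2, grad f(x) = x.
draw = lmc(grad_f=lambda x: x, x0=np.zeros(2), step=0.1, n_iters=1_000)
```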
Similar references
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
In this paper, we revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm for sampling from a smooth and (strongly) log-concave density. We improve, in terms of constants, the existing results when the accuracy of sampling is measured in the Wasserstein distance and provide further insights into the relations between, on the one hand, the Lange...
Sampling from a log-concave distribution with Projected Langevin Monte Carlo
We extend the Langevin Monte Carlo (LMC) algorithm to compactly supported measures via a projection step, akin to projected Stochastic Gradient Descent (SGD). We show that (projected) LMC makes it possible to sample in polynomial time from a log-concave distribution with smooth potential. This gives a new Markov chain to sample from a log-concave distribution. Our main result shows in particular that when...
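The projection step described above mirrors projected SGD: each Langevin proposal is mapped back onto the compact support by a Euclidean projection. Below is a minimal sketch under the assumption of a Euclidean-ball support; the helper names and the choice of constraint set are illustrative, not the paper's setup.

```python
import numpy as np

def projected_lmc(grad_f, project, x0, step, n_iters, rng=None):
    """Langevin proposal followed by a Euclidean projection onto the support."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        y = x - step * grad_f(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        x = project(y)  # pull the proposal back into the constraint set
    return x

def project_ball(y, radius=1.0):
    """Projection onto a Euclidean ball, one concrete compact convex support."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else (radius / norm) * y
```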
Bayesian Learning via Stochastic Gradient Langevin Dynamics
In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesi...
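The update behind this framework is the SGLD step: a mini-batch stochastic gradient step on the log-posterior plus injected Gaussian noise whose variance is tied to the stepsize, so annealing the stepsize moves the iterates from optimization toward posterior sampling. A minimal sketch follows; the function names and mini-batch interface are assumptions for illustration.

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik, batch, n_data, step, rng):
    """One SGLD update:
    theta <- theta + (step/2) * (grad log p(theta)
                                 + (N/n) * sum_i grad log p(x_i | theta))
                   + N(0, step * I)."""
    g = grad_log_prior(theta)
    for x in batch:
        # Rescale the mini-batch likelihood gradient to estimate the full sum.
        g = g + (n_data / len(batch)) * grad_log_lik(theta, x)
    return theta + 0.5 * step * g + np.sqrt(step) * rng.standard_normal(theta.shape)
```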
Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference
Gradient-based Monte Carlo sampling algorithms, like Langevin dynamics and Hamiltonian Monte Carlo, are important methods for Bayesian inference. In large-scale settings, full-gradients are not affordable and thus stochastic gradients evaluated on mini-batches are used as a replacement. In order to reduce the high variance of noisy stochastic gradients, [Dubey et al., 2016] applied the standard...
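For context, a standard way to reduce the variance of mini-batch gradients is an SVRG-style control variate: the mini-batch gradient is re-centered around a full gradient computed at a periodically refreshed anchor point. The sketch below shows one such corrected gradient plugged into an SGHMC-style update with friction; the anchor construction and all names here are assumptions for illustration, not the cited paper's exact scheme.

```python
import numpy as np

def svrg_grad(grad_i, theta, anchor, full_grad_anchor, batch_idx):
    """Control-variate gradient estimate: unbiased for the full gradient,
    with variance shrinking as theta approaches the anchor point."""
    g = np.zeros_like(theta)
    for i in batch_idx:
        g += grad_i(theta, i) - grad_i(anchor, i)
    return g / len(batch_idx) + full_grad_anchor

def sghmc_step(theta, v, grad_est, step, friction, rng):
    """One SGHMC momentum update; the friction term damps the extra
    noise introduced by the stochastic gradient estimate."""
    v = ((1.0 - friction) * v - step * grad_est
         + np.sqrt(2.0 * friction * step) * rng.standard_normal(theta.shape))
    return theta + v, v
```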
Investigation of Monte Carlo, Molecular Dynamic and Langevin Dynamic simulation methods for Albumin-Methanol system and Albumin-Water system
Serum Albumin is the most abundant protein in blood plasma. Its two major roles are maintaining osmotic pressure and depositing and transporting compounds. In this paper, an Albumin-methanol solution simulation is carried out by three techniques including Monte Carlo (MC), Molecular Dynamic (MD) and Langevin Dynamic (LD) simulations. By investigating energy changes by time and temperature (between 27...